#1#  jan 28
MECHANICS
 Books vs. students
 take home exams
 machine problems in about a month
 additional papers
    errata
    bibliographical material--some kind of library (beware of rip-off)
    general h.s.
    much later: missing sections

INTRODUCTION
Why study LISP? The  question is more fundamental: why a high-level language
for data structures?

Why high-level for anything?
Why is Fortran-PL/1-Algol of value for numerical problems? They allow us to
formulate our problems at a certain level of abstraction -- to talk about
our problems in mathematical terms, discussing numbers and functions,
rather than bit patterns in the machine. (Ignore approximations in
computations, in the algorithm or in the representation of numbers.) Thus
the primary benefit of a high-level language is notational. At a certain
level, after the conceptualization is done, we must usually worry about
efficiency. Efficiency studies can be the difference between a practical
solution and a disaster. But the basic understanding must come first. That
is the level at which we shall discuss our algorithms for most of the
course. We are asking for a level of abstraction similar to that which
common mathematical notation gives us for numerical problems. What are the
benefits? We hope to demonstrate in the course that this added abstractness
allows us to see more clearly exactly what the basic structures of data
structure manipulation are. We will also show that the great majority of
what is modern c.s. IS such algorithms. What will be impressive is that at
this level of abstraction, we can more clearly cover more material with a
better understanding. (DEK vs. MIX)

    math notation			??
      |					↑
      ↓					|
   efficiency of numerical   →  eff of data struct
   alg				alg

at this level of abstraction we are interested in values of expressions
rather than the means by which they are computed; cf. x=sin(y*3). why is
this good? difficult and pressing problems of c.s. of correctness and/or
equivalence of progs are addressed most easily at this level. more and
more frequently we are becoming able to write program manipulation systems
which transform progs equivalently...

at this level we can develop the necessary tools to aid in the construction of
correct programs.   the problems of c.s. are not efficiency, but correctness.

general  disclaimer:  we will  not  pursue  theory  for the  sheer  joy  of
inflicting pain; neither  shall we ignore its benefits when it pokes us in
the eye. Similarly  for efficiency;  we certainly can  talk about  relative
efficiency at  this higher  level, without  resorting to  counting cycles. 
This answers, or begins to answer: WHY HIGH LEVEL? WHY LISP? 

 why LISP
mechanics
  prog lang ideas, wo. hhs. syntax
   recursion, functional args and values.
   abstract data structures -- data structures as compared with storage structures
   structured programming --whatever that means

semantics of prog languages -- interpreter to understand  meaning of constructs
                                symbol tables and binding strategies.

**by this time you should be hooked ***

implementation -- realization of evaluator on "real" machine
                     lists, garbage collectors, storage reclamation..
                     parsers, scanners, symbol tables

running and compilation  -- easiest and best way to see compilers
                    assemblers and data structures
		    debuggers, editors, and on-line systems


efficiency -- grubs of alternative implementations
                 pointer manipulations, threading, double linking
                 string processors

theory  -- denotational semantics,  proving correctness
              mathematical methods and their importance

applications  -- a.i., t.p., interactive programming 
                    pattern matching, PLANNER and CONNIVER
                    game playing, program manipulators

implications for language design  -- how can we improve

political, philosophical, bull shit -- one man's phil. is another's b.s.


what does all this have to do with course on data structures?

show relationship ; go back and underline instances of d.s.
taken in context the ideas are quite easy; out of context they are
rather sterile.


MECHANICS OF LISP

attempt to blend theory with practice. we will not pursue theory out of
sheer perversity, neither shall we ignore rigor when it will improve
our perception.


	BASICS

formal language
	domain:symbolic expressions
	operations

domain atomic   
       BNF almost NO syntax here; very D-U-L-L
	non-atomic:dotted pairs

atoms
	literal atoms=uppercase identifiers
	numbers   (uninteresting)

non-atomic: dotted pairs


examples.

interpretation as binary trees; easy.

functions:  domain, range and graph(set of ordered pairs)
primitive:

	cons[x;y]  total; forms dotted pair; called constructor
	car,cdr    partial; first or second; called selectors.

composition allowed; abbreviation of car-cdr chains

informal definition facility: <=

note: good LISP functions will seldom use explicit car's and  cdr's.
we will begin discussing esthetics next time.

not very exciting so far. need conditional

[p1→e1; ....]  semantics

obviously need predicates; 
LISP maps true to T and false to NIL thus predicates have values in domain
not truth values as normally understood in math; thus can compose

really should call LISP "predicates" something else: propositional forms

o.w. condition

two primitive: atom and eq
	atom	total
	eq	only for atoms

example:fact
	equal 
	subst
	fact with aux function
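
a rough sketch of two of these in a modern Lisp (my transcription, not the
course's definitions; the MY- prefixes just dodge the built-in names):

(defun my-equal (x y)                 ; equality on arbitrary sexprs
  (cond ((atom x) (and (atom y) (eql x y)))
        ((atom y) nil)
        ((my-equal (car x) (car y)) (my-equal (cdr x) (cdr y)))
        (t nil)))

(defun my-subst (x y z)               ; replace every occurrence of the atom y in z by x
  (cond ((atom z) (if (eql z y) x z))
        (t (cons (my-subst x y (car z)) (my-subst x y (cdr z))))))

;; (my-subst '(A . B) 'X '(X . (Y . X)))  =>  ((A . B) Y A . B)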

compare computation and mathematics
  order of evaluation, undefined and non-terminating computations

examples: 
car[cons[x;y]] = x ?; no since y may not terminate or may be undefined

[0=0 → 1; T → 1/0]

f[x;y] <= y   if we insist on evaluating x we may lose "unnecessarily"
eg f[car[A];...]
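
a small illustration of that last point (mine, in a modern Lisp): f never
looks at its first argument, yet call-by-value evaluates it anyway.

(defun f (x y)
  (declare (ignore x))     ; f never uses x
  y)

;; (f (car 'A) 2) evaluates (car 'A) before f is ever entered, so it blows up,
;; even though f[car[A];2] "ought" to be 2 if we didn't insist on evaluating x.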



#2# jan 30
Review and examples

*****has everyone seen differentiation of polynomials?*****

 king kong
 take home exams
 machine problems in about a month
 additional papers
    errata
    bibliographical material--some kind of library (beware of rip-off)
    general h.s.
    much later: missing sections
office hours 216 1-3 t-th

REVIEW
 Why high level: high level is mostly notational convenience
 abstraction 
   clarity: close to our informal formulation
    =>easier to debug
    =>less dependent on specific implementation ****very important!!!!!
    => easier to write more complex progs.
    => the reasoning we did to convince ourselves of the correctness of the
        informal algorithm will carry over usually. (except approximation)

interested in WHAT is computed rather than HOW?

problems in current c.s. are correctness not efficiency.

why LISP -- see as we go along
 if you know Weissman----forget it  for now

motivation: efficiency after conceptualization
          where we will see "real" data structures.


basics of language: a domain - sexprs; and functions

sexprs binary trees and dotted pairs ---both are representations
binary trees, NO intersections

perhaps BNF for atoms and  dotted pairs.

    cf. 2  in roman, binary, etc...

examples

functions  basic characteristics of d.s. functions: constructors selectors
and recognizers 
  constructor: cons
  selectors:  car,cdr  tell why
  recognizer: atom
  predicate: eq   (=)

composition of functions

<= intuitive definition

do simple evaluation car[cdr[(A . (B . NIL))]]
  problems of evaluation, computation and mathematics
   car[cons[x;y]] = x
   f[x;y] <= y

things can get worse with conditionals
 review semantics

do f[x;y] <= [g[x] → 1; T→ 1]

do example problems
 fact
*** terminology: 
      recursive function
      general case and termination: c.f. induction
      use diff as example
 ***see any goto's?***

 equal
 subst


#3# feb 4
LIST NOTATION AND ABSTRACTION

Today we will start putting things together. so far we have just seen
a rather stark outline of a programming language.  What does this have
to do with data structures?  What ARE data structures? 

What we will do is attempt to  characterize a common mathematical
structure (sequences) as a data structure. This investigation will
give us a handle on answering questions like:

1. what is a d.s.
2. how do you characterize one
3. the power of abstraction
4. top down programming, and structured programming --whatever that means.

Note first that this is not simply a sterile exercise in mathematical 
chicanery. Sequences are frequently used in numerical problems as arrays
and will appear in   the majority of our non-numerical work.

For example
frequently in c.s. we find it convenient to deal with sequences.
  examples
	(x+y)+(x+z) => 2x+y+z
 add[(+,x,y) (+,x,z)] => (+,(+,2x,y),z)

Recall that we have made a beginning into the discussion of sexprs, by noting
the characterization in terms of constructors, selectors, and recognizers. 

constructor: list
selectors:   first, second, nth
recognizer:  null
             perhaps  islist as additional recognizer: typing
predicate: =
constant:    ()

now let's start using these characterizations and see if they are
sufficient and/or most natural.


example length
        append
        fact1
	reverse
	do length1

length[x] <= [null[x] → 0; T → plus[1;length[????   : rest[x]

note that we can define second...nth in terms of first and rest.

try a list-constructing function, append

append[x;y] <= [null[x]→ y; T → ???    : concat[x;y]

note that we can define (almost) list in terms of concat.

append[x;y] <= [null[x]→y; T →concat[first[x];append[rest[x];y]] ]

question of representation: how can we reconcile our
neat usage of list-manipulating functions and predicates
with sexprs??  problem of representation

1. rep for lists as sexprs
       do BOX NOTATION as embellishment of binary trees  
	notation for NIL
2. implied rep of functions and predicates  as sexpr manipulators
3. i/o conventions
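
a sketch (mine) of points 1 and 2: the list primitives realized as sexpr
manipulators, and length/append written only in terms of them (the * names
just avoid the built-ins):

(defun first* (l) (car l))            ; selector
(defun rest*  (l) (cdr l))            ; selector
(defun concat (e l) (cons e l))       ; constructor: e onto the front of the list l
(defun null*  (l) (eq l nil))         ; recognizer for the empty list ()

(defun length* (l)
  (cond ((null* l) 0)
        (t (+ 1 (length* (rest* l))))))

(defun append* (x y)
  (cond ((null* x) y)
        (t (concat (first* x) (append* (rest* x) y)))))

;; only the four primitives know that lists are represented as sexprs;
;; change the representation and length*/append* survive untouched.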

#4# feb 6
Cleanup and review

pre-note: things must be intuitive: theory in cs. should be
used like garlic in cooking.

as things currently stand, you should have a reasonably complete
picture of the world of data structures. In a phrase, almost devoid
of any meaning, it's the problem of representation.
Just as mathematics  maps real-world problems onto numerical theories,
computer science maps problems onto "theories" over data structures.

homely parallel: machine language in d.s. and roman numerals in mathematics

For example we will soon show that the problems of evaluation, compilers, and
language implementation are all amenable to this technique. Indeed the
question of representation is quite relative; we will show how
the primitives of LISP (in particular) are easily mapped onto lower-level
representations; i.e. the storage structures. 

Once we understand the higher-level representation problems, the
questions of storage and relative efficiency are quite  approachable.


we made a start at discovering data structures; note that what is
a data structure at one level is a program at another. thus the
distinction between prog and data is not as transparent as one would like.

note too that constructors, selectors and recognizers, even with 
the I/O conventions, are not the most satisfactory solution. Frequently
we want more control. I.e. there will be cases in which we will want
operations only applicable to specific d.s. rather than applicable in general.
look at attempt to characterize "finite set". look at = for set and seq and bag.

I.e. we want to be able to program at the higher-level of list and sequence
user-defined data structures.


what are the components of programming languages: data structures;
operations(procedures); and control structures( if-then; while;do; go to;
..., and recursion)
What's different about recursion?   It's implicit rather than explicit.
or it's semantic rather than syntactic.(?) 

What can we say  about this high-level data structure approach?
this is  what top  down programming - abstract data structures,  structured
programming  is all  about..  it has  very little  to  do with  goto's. it's
programming style!  structured programING (sic). 
also note that if we change representations, then all we need change is
the const, sel, and recog.

homework, problems, and graduate students.

review and do examples: 
graphing   dotted pair <=> binary tree
	   list <=> binary tree

functions and evaluation
    evaluation should be reasonably intuitive; an extension of
things like "evalute x+y/z when x=2  y=3 z=4" or
            "evalute f[2;3;4] when f[x;y;z]=x+y/z "

only now things are a bit more complicated. The complication
is most apparent during recursive evaluation. It can be assuaged
by making explicit substitution, rather than trying to remember current
values.


The real sticky business finally comes when we have to write our
own functions. But who says programming is easy?
LISP at least makes your decisions easy: there is ONLY one way to
write a non-trivial function: recursion! The arguments go like
induction: find the right induction hypothesis and win; find the right
recursion  computation and win. It's easier to begin with unary
functions; you don't  have to decide what to recur on.
Then you must think of the structure of the argument

sexpr: atom or dotted pair
list:  null or non-empty

examples: we will stay away from "meaty" examples today; next time start 
doing applications to "real" problems. First must get the mechanisms straight.


 member
 times over plus
#5# feb 11
More examples:
 gcd?

do some involving construction:
pairlis is over sexprs and lists
assoc


try reverse

reverse[x] <= [null[x] → (); T → append[reverse[rest[x]];list[first[x]]]]

if you look at  a typical computation sequence involved in an evaluation
you will be appalled at the inefficiency.  The  intuitive computation is
quite simple:


x		y
(A B C D)	()
(B C D)		(A)
(C D)		(B A)
(D)		(C B A)
()		(D C B A)

Certainly, with the typical language's assignment statement it would
be obvious to do.  The subset of LISP which we currently have doesn't have
assignment. There is one obvious answer, but it is valuable to examine
carefully the power of the tools which we currently have.

look at factorial again.

n! = n*[n-1*[ ....]...]

we need a separate variable to hold the on-going computation
fac[n] <= fac*[n;1]

fac*[n;m] <= [n=0 →m; T → fac*[n-1; m*n]]


try the same trick for reverse since the computational effect is similar.

rev[x] <= rev*[ x;()]
rev*[x;y] <= [null[x] → y; T → rev*[rest[x];cons[first[x];y]]]


compare rev and fac
f[x] <= f*[ x;G[] ]
f*[x;y] <= [p[x] → g[y]; T → f*[s1[x];c1[s2[x];h[y]]]]

relation between append and reverse.
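
the two accumulator versions side by side, transcribed into a modern Lisp
(same algorithms as above, just runnable):

(defun fac (n) (fac* n 1))
(defun fac* (n m)                     ; m holds the product so far
  (cond ((= n 0) m)
        (t (fac* (- n 1) (* m n)))))

(defun rev (x) (rev* x '()))
(defun rev* (x y)                     ; y holds the reversed part so far
  (cond ((null x) y)
        (t (rev* (cdr x) (cons (car x) y)))))

;; (rev '(A B C D)) drives y through (A), (B A), (C B A), (D C B A), as in the table.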


??box notation???
********************************************
      nice meaty examples
do example of differentiation.

importance
	shows informal alg which is naturally recursive
	  note if a process even hints of being mechanical
	     it can be programmed

        show mapping of domain onto rep as lists.
	shows mapping of intuition to prog (spec) language
	show lisp power
	will show how to take it into an abstract algorithm
	 do selectors: op, arg1, arg2
	    recognizers: isprod, issum, isconst, isvar
	    constructors: makesum, makeprod

	do example: x*y +x  wrt x
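
a sketch of the abstract algorithm (mine), assuming the prefix representation
(PLUS e1 e2), (TIMES e1 e2) and the constructors/selectors/recognizers listed
above:

(defun isconst (e) (numberp e))
(defun isvar   (e) (and (atom e) (not (numberp e))))
(defun issum   (e) (and (consp e) (eq (car e) 'PLUS)))
(defun isprod  (e) (and (consp e) (eq (car e) 'TIMES)))
(defun arg1 (e) (cadr e))
(defun arg2 (e) (caddr e))
(defun makesum  (a b) (list 'PLUS a b))
(defun makeprod (a b) (list 'TIMES a b))

(defun diff (e x)
  (cond ((isconst e) 0)
        ((isvar e) (if (eq e x) 1 0))
        ((issum e) (makesum (diff (arg1 e) x) (diff (arg2 e) x)))
        ((isprod e) (makesum (makeprod (arg1 e) (diff (arg2 e) x))
                             (makeprod (diff (arg1 e) x) (arg2 e))))))

;; the example: (diff '(PLUS (TIMES X Y) X) 'X)
;;   => (PLUS (PLUS (TIMES X 0) (TIMES 1 Y)) 1)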
#6# feb 13
relationship between abstraction and structured programming, stepwise-
refinement

show pic of trees and fns in parallel.



why numerical examples are not good: essence of problem has been
squeezed into numerical relationships, all structuring is gone; it's
implicit rather than explicit.
thinking this way is inefficient?: bull shit! learn to make machines

*************************************
************** DO tgmoaf ************
*************************************

start value
#7# feb 18
value function for polys.
 constants, sums and prods
 x+y where x=2, y=3: tables
#8# feb 20

 more value
 f[2;3] where f[x;y] <= x+y

value[e;tbl] <= 
	[isvar[e] → lookup[e;tbl];
	 isconst[e] → e;
	 isfun_args[e] → apply[fun[e];eval_args[args[e];tbl];tbl];
         ]

lookup[x;tbl] <=
	[var[first[tbl]] = x  → val[first[tbl]];
	 T → lookup[x;rest[tbl]]
	]

eval_args[args;tbl] <=
	[null[args] →();
	 T → concat[value[first[args];tbl]; eval_args[rest[args];tbl]]
	]

apply[fn;eargs;tbl] <=
	[is_plus[fn] → +[arg1[eargs];arg2[eargs]];
	 is_prod[fn] → *[arg1[eargs];arg2[eargs]];

	 ... <other  known functions> ...

	 T → value[body[lookup[fn;tbl]]; new_tbl[vars[lookup[fn;tbl]];eargs;tbl] ]
	]

new_tbl[vars;vals;tbl] <=
	[null[vars] → tbl;
	 T → concat[mkent[first[vars];first[vals]];
	            new_tbl[rest[vars];rest[vals];tbl] 
	           ]
	]

For Cambridge Polish representation the following const., sel., recog. suffice:

recognizers:
isvar[x] <= [atom[x] → [numberp[x] → NIL; T → T]; T → NIL]
isconst[x] <= numberp[x]
is_plus[x] <= eq[first[x];PLUS]
is_prod[x] <= eq[first[x];TIMES]

selectors:
fun[x] <= first[x]
args[x] <= rest[x]

  (assuming the table is represented as a list of dotted pairs:)
var[x] <= car[x]
val[x] <= cdr[x]

arg1[x] <= first[x]
arg2[x] <= second[x]

  (assuming function definitions are stored as (F . ((X Y) (PLUS X Y))))
body[x] <= second[x]
vars[x] <= first[x]

mkent[x;y] <= cons[x;y]
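
a compact transcription of value/lookup into a modern Lisp (mine), keeping the
same representations -- Cambridge Polish forms and a table of dotted pairs --
but handling only the known functions; the user-defined-function case of apply
is left out, and apply is renamed to avoid the built-in:

(defun value (e tbl)
  (cond ((numberp e) e)                               ; isconst
        ((atom e) (lookup e tbl))                     ; isvar
        (t (apply-fn (car e)                          ; fun
                     (eval-args (cdr e) tbl)))))      ; args

(defun lookup (x tbl)                                 ; assumes x is in the table
  (cond ((eq (caar tbl) x) (cdar tbl))
        (t (lookup x (cdr tbl)))))

(defun eval-args (args tbl)
  (cond ((null args) nil)
        (t (cons (value (car args) tbl) (eval-args (cdr args) tbl)))))

(defun apply-fn (fn eargs)
  (cond ((eq fn 'PLUS)  (+ (first eargs) (second eargs)))
        ((eq fn 'TIMES) (* (first eargs) (second eargs)))
        (t (error "unknown function ~a" fn))))

;; (value '(PLUS X (TIMES X Y)) '((X . 2) (Y . 3)))  =>  8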


comments
test march 4; one week

note cleanliness of definition 
   assumptions are more easily seen: call by value, order of evaluation..
   machine independent
    no car cdrs.
  
if you knew about a specific machine, you could very easily transform
value into a compiler
   difference between compiler and evaluator
   why compilers are a hack
   why syntax is a hack

on to tgm's
#9# Feb25	 evaluation


review of deriv and representation

intuitive sketch
	constants
	vars and environment
	function calls: decision cbv and l-r
	 how do we find name
	conditionals

 example rev1[x;y] <= [null[x] → y; T → rev1[cdr[x];cons[car[x];y]]]
 	    for rev1[(A B C);()]


 note recursiveness of evaluation
 cf. diff example
 look at tgmoaf

 do mapping of LISP expressions≡forms to sexprs

 start eval
	question: how to find vars and functions
  symb. tabs and assoc for lookup; cons for entry
    form-valued vars.

start apply
	how isfun recognizes: (FN ...) but compare (COND ...)
        special forms: add evcond

var is function; what is f[x;y]: compare cons[A;B] and cons [x;y]

use f[x;y] <= x+y

 what about functions? the λ-calculus
  λ-calc
  terminology λ-vars,λ-list, body
  syntax-pragmatics-semantics


<= looks like assignment? or does it?
	DBA


finish basic eval-apply
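
a minimal eval/apply sketch in a modern Lisp (mine, not the course's code): the
environment is an a-list of dotted pairs, COND is handled as a special form,
and function names are looked up in the same table.

(defun m-eval (e env)
  (cond ((numberp e) e)
        ((eq e t) t)
        ((eq e nil) nil)
        ((atom e) (cdr (assoc e env)))            ; variable lookup
        ((eq (car e) 'QUOTE) (cadr e))
        ((eq (car e) 'COND) (m-evcond (cdr e) env))
        (t (m-apply (car e) (m-evlis (cdr e) env) env))))

(defun m-evcond (clauses env)
  (cond ((null clauses) nil)
        ((m-eval (caar clauses) env) (m-eval (cadar clauses) env))
        (t (m-evcond (cdr clauses) env))))

(defun m-evlis (args env)
  (cond ((null args) nil)
        (t (cons (m-eval (car args) env) (m-evlis (cdr args) env)))))

(defun m-apply (fn eargs env)
  (cond ((atom fn)
         (case fn
           (CAR  (car (first eargs)))
           (CDR  (cdr (first eargs)))
           (CONS (cons (first eargs) (second eargs)))
           (ATOM (atom (first eargs)))
           (EQ   (eq (first eargs) (second eargs)))
           (t (m-apply (cdr (assoc fn env)) eargs env))))   ; a name: look it up
        ((eq (car fn) 'LAMBDA)
         (m-eval (caddr fn) (m-pairlis (cadr fn) eargs env)))))

(defun m-pairlis (vars vals env)                  ; bind λ-vars onto the a-list
  (cond ((null vars) env)
        (t (cons (cons (car vars) (car vals))
                 (m-pairlis (cdr vars) (cdr vals) env)))))

;; the rev1 example, with its definition bound in the environment:
;; (m-eval '(REV1 (QUOTE (A B C)) NIL)
;;         '((REV1 . (LAMBDA (X Y) (COND ((EQ X NIL) Y)
;;                                       (T (REV1 (CDR X) (CONS (CAR X) Y))))))))
;;   =>  (C B A)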

what about defns: label
   label works as assignment  in LISP, but ok.
  problems of globals
   disallow; lose very big.
   take previous val;lose:fact
   take value when applied;lose funarg
   fact-foo DBA example again
x=1 vs. x←1 
= as predicate and as equation x=x↑2 + 6
     fixpoint

 functions get messy: get notation: Weizenbaum B-W.

functional args: what does it mean?    foo[x,y] <= x[y]
 rep as sexpr
   functional args passed in
   functions as values
 
but label is good for lisp
 adding functions

adding other things: special forms; and[x1, ..., xn]
   point to implementations and p-lists

scott: parallel sit to λ-calc 'til 1969
  for lisp
    why: over-specify
         precision
         therefore mathematics available for proofs of properties

on (yecch) machine
  define and evalquote

		LECT 3:implementation --static


review eval with implementation in mind

eval constants and conditionals

apply primitives, lambdas and fn names

question why not handle cond inside apply
special forms: another example, "and"

two kinds of evaluation--extend to user

another problem: definitions go away  eval[e, env]
two solutions: start with BIG env';  or hand code them into eval and apply

bindings
	globals: dynamic binding; look at structure of alist
can't outlaw; c.f. fact; different solution: fix-point

	functional args
		coming in
		going out
	implementation
		funarg hack: lisp was first
	       (FUNARG fn st)

		do weizen examples



implementation
   separate out symbol table: eval[e]
   different properties expr, fexpr
   efficient

box notation NIL and atom header

properties: subr fsubr expr fexpr value
  attribute-val pairs

atom header thus atom[x]

search: GET; add PUTPROP
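
a sketch of the search (mine), with the p-list represented as alternating
indicator/value pairs, e.g. (EXPR (LAMBDA (X) X) VALUE 3); the names
GETPROP/PUTPROP are used so as not to clobber the built-in GET:

(defun getprop (plist ind)
  (cond ((null plist) nil)
        ((eq (car plist) ind) (cadr plist))
        (t (getprop (cddr plist) ind))))

(defun putprop (plist ind val)
  ;; non-destructive sketch: returns a p-list in which IND now has value VAL
  (cond ((null plist) (list ind val))
        ((eq (car plist) ind) (cons ind (cons val (cddr plist))))
        (t (cons (car plist)
                 (cons (cadr plist) (putprop (cddr plist) ind val))))))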

unique atoms (via  read) thus eq[x;y]


what about printing (A.B)  thus PNAME

worry about binding after we get machine set up.

what about cons: free space and full word space

gc: base pointers: symbol table and partial comp


basic structure of lisp machine; h.s. about inst and data

read eval print loop

details of read-ratom, parser, scanner

hashing, oblist,buckets ***if time***

ratom-unique atom


do (CONS (QUOTE A)(QUOTE B))

now finally, bindings

in apply 
   islambda => save old, bind new, eval body, restore old, buzz off

recall assoc

look at value cell...special push-down , mscw
advantages, fast save and restore
disadvantages, funargs

recall old

now at defn save old sp
at apply restore to state at old sp, while saving current
     bind locals and go, at end, rebind flushing to st at entry.
 

sigh

bootstrapping

next time compilers, and machines

wed??? implications and scott

		LECT  4: compilation and dynamics

      **************************
*****tell about problem on p149.******

put evlis and evcon on the board
     ***************************


REVIEW
	p-lists
	values-cell and binding
	funargs
	structure of new eval.. notice now look first at fn before args
	 indicators expr fexpr value macro
	    do "and"

compilers, machines and implementation of recursion
 what is it: relation to interpretation; nothing conceptual
 why compile: speed + extra sexpr space..conceptually no better
 bootstrapping
   structure compiler: generators
	machine to machine
	lang to extension
 binary program space (hexadecimal)

LISP compilers don't hack syntax
compilers in tgm-steps 
 functions composition and constant args
   refer to evlis
   calling conventions: stack+ac1-acn
     value returned convention
     comp,push, comp, push ..... loadac[m]
   do it; MUST since crucial to following var arg!!
   integrity of stack
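
a sketch (mine, with made-up pseudo-instruction names MOVEI/PUSH/LOADAC/CALL in
the spirit of the conventions above) of this first step -- composition and
constant arguments only. each argument is compiled and PUSHed; LOADAC pops the
n results back into AC1..ACn just before the CALL, so the stack is left as it
was found:

(defun compexp (exp)
  (cond ((numberp exp) (list (list 'MOVEI 'AC1 exp)))   ; constants only, for now
        (t (append (complis (cdr exp))                  ; compile and push the args
                   (list (list 'LOADAC (length (cdr exp))))
                   (list (list 'CALL (length (cdr exp)) (car exp)))))))

(defun complis (args)
  (cond ((null args) nil)
        (t (append (compexp (car args))
                   (list '(PUSH P AC1))                 ; save the value just computed
                   (complis (cdr args))))))

;; (compexp '(plus 1 (times 2 3))) gives the comp,push,comp,push...loadac,call pattern;
;; variables come in a later step.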

 add conditionals
   refer to evcond
   linearize cond tree
   start comcond[((p1 e1) ...(pn en))] 
	convention on truth,jumpe;gensym
   write comcond; and add recognizer in compexp
    one pass assemblers
	fixups and forward refs
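
a sketch of comcond (mine), linearizing the cond tree with gensym'd labels;
JUMPNIL/JUMP are made-up stand-ins for the machine's conditional and
unconditional jumps, compexp is the sketch above, and labels appear as bare
symbols in the instruction list:

(defun comcond (clauses)
  (let ((out (gensym "L")))                       ; label for the end of the whole COND
    (append (comcond1 clauses out) (list out))))

(defun comcond1 (clauses out)
  (cond ((null clauses) nil)
        (t (let ((next (gensym "L")))             ; label for the next p-e pair
             (append (compexp (caar clauses))             ; value of p(i) in AC1
                     (list (list 'JUMPNIL 'AC1 next))     ; false: try the next clause
                     (compexp (cadar clauses))            ; value of e(i) in AC1
                     (list (list 'JUMP out) next)
                     (comcond1 (cdr clauses) out))))))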

 add local variables
  consider λ[x,y,z] ....x.....y.....z
   conventions says @call v(x), v(y), v(z) in ac1,2,3; save 'em;
     partial comp changes t.o.s.
       who? complis.
     good example j[x y] <= f[g[x];h[y]]
   convention: ...-n P)
   offset + rel pos
     lookup is  offset+cdr[assoc...]
  conventions: λ[x y z] => prup, PUSH, PUSH, ...
	       clean-up stack and POPJ


 add globals shallow and deep
 stack won't work as is: only value is saved on stack;special cell
  deep as in interlisp; save name-value; like a-list; compiler can be smart
    stupid interpreter; stupid interlisp

compiling other parts of lisp: fexprs, macros. functional args. progs.

debuggers and editors
  tracing: hide defn; replace with tracer; similar monitor on assignment


what should LISP have:
documentation: damn tired of people reinventing McCarthy's ideas
user-defined ds.
structuring s.t. not only an impl. in the lang but THE impl in the lang
sequences, structures, pointers
abstract control
better closures
form vars

		LECT 5: reflections...In a pig's eye!!!

to: pdp-11 hackers: look at LISP as if you were bootstrapping to the 11
  don't look at all the details.


reflections
remarks on short course and teaching
students, particularly grad students, should be stirred up
teachers are not dispensers of wisdom in this field


structured programming
 program is not, programming should be.
   structured apl (NO WAY), cf. siftup (BOO!)

what is interesting
  syntax isn't
  compilation isn't
 compilers, syntax analysis and pl1 have done much to retard  c.s.

semantics is
structure of language system
 problems in programming are not in efficiency but in correctness!!!!!!!


people gasp at representation of lisp-like inefficiency, but are
conditioned to long programming development and debugging time, buggy
programs and crap.   even if machines were made ten times faster the
debugging problem would not go away (but LISP interpretation
would be acceptable)

required: not nec. fol proofs but proofs  with "conviction".

COMPARE AI AND THEORY: mismatch between titles and content of papers
  ai: program which plays chess => 4x4 board with two pieces
 theory: program verification system for pascal-lisp-pl1

all languages are bad: some are worse than others

what is lisp-like lang; why is it better/different.
why is LISP still around? very strange reasons given...

apg: 1)formal; 2) guess what i'm thinking

what is feasible, desirable, useful...
  not mind reader
  not knowledge of programming area
    programs which try to be smart lose very big.
  know about programming, types, ...
    competent assistant is reasonable

semantics
 operational
 axiomatic
 denotational

language: syntax pragmatics and semantics

benefits of denotational modeling
 precision => provability
 don't overspecify cf. order in cond


correctness of compilers and evaluation
  compiler correctness is different from prog correctness
equivalence-Boyer Moore
manipulation: darlington
lcf and Scott
 cf. axioms + correctness, completeness proofs
     semantic equations, existence of model

mal
 correctness of compilers and interpreters for lisp subsets
 axioms for formal manipulation of lisp progs.

mg
 scott for lisp
 semantic equations
 model
 rules of inference

tennent
 good popularizer, QUEST

milne
 attempts to analyze "realistic" implementations. unreadable

wadsworth
 analysis of λ-cal

implications
el1 and s-lisp
planner
 calling by pattern
 data bases

conniver
 add -- if-added
 remove  -- if-removed
 fetch -- if-needed  try-next and possibilities lists
    make prog look like data
 control and access environs
  adieu and au-revoir

 contexts for d.b.'s


actors and control???

list of papers to read
humble programmer
three computer cultures

authors, wegner, scott, reynolds,
may 6 globals and big pic.

review
 show code for globals
 comment on efficiency: better ways
     basic frame and extension(b-w)
 code for eval

why not compile everything??
 1. cons-up form; not important
 2. programming env:  is important: TWITY not DWIM
  specify and construct   
  modify    list structure editing
  debug      tracing; breaking; call-pushj
  verify     proofs of equiv and correct
   apg
   why prove anything? do fact, f(a1,...an) ?=? eval[(F A1 ....AN)]
     what is eval? how do you specify lang?
      hoare
      boyer-moore
      darlington
      low
  compile
    correctness of compiler?
  run
     first 5 out of 6 really involve looking at prog as data

give description of how to program

 discussion importance of verification ( wright bros.)

 discuss way should be done: school of computation; fuck ecpi
  compare med school.
lisp: it was the best of lang, it was the worst of lang
   should not be implemented except as a teaching tool in university
   we must do better in practical langs.

start efficiency
 1.of storage structures: strings arrays double linking pointers

 2.of language: look at lambda binding (1 at a time) seq and struct
   a.d.s. recursion and iteration.
   the interpreter in the lang (not a)

data bases and a.i. lang.
may 8  more bull
grass

harangue on inefficiency
whats inefficient?
doing things twice ; coding forms and keypunching.
working with inaccurate data: memory dumps. (tracing such that one must predict
  error paths)
useless repetition: compiling program to debug it.


harangue on "practicality"

1. PUB is not practical; but ....
2. stacks weren't practical
3. architecture and mcdonalds

bio

school of computation; ecpi ≡ witch doctors

on professionalism: why? someways a (false?!) sense of importance. but more
importantly,  improvement. why professional medicine? quackery can have
devastating consequences. similarly....guthrie


start over day 1 page 1; 

a.i. ideas: why ? past experience. time sharing; display-based; stacks;
hashing, list structure; int. prog and debugging(e and raid), symbol
manipulation(reduce macsyma); bootstrapping compiler.


thurs		tues		thurs
haran		string gc	threading, double linking, sequential
eff of s.s.   	 compact	dynamic alloc
arrays		pointers	semantics
strings		structures	(funargs and processes??)
		assemblers

***iteration***???
may 13
10 to fill out; qual; alchemy/chemistry = current/c.s.

strings
 gc review
  note intersections in lisp structure

string g.c.
 compaction

compacting gc for lisp


pointers
 note: old question: why gc and not ref count. => pointer hackery

copy vs. share
 do append-nconc
   side effects.

rplaca rplacd
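
a sketch (mine) of the copy/share contrast: append copies its first argument,
nconc splices by smashing the last cdr with rplacd.

(defun my-nconc (x y)
  (cond ((null x) y)
        (t (rplacd (last x) y)        ; side effect: the last cell of x now points at y
           x)))

;; (setq a (list 1 2 3))  (setq b (list 4 5))
;; (append a b) => (1 2 3 4 5), and a is untouched;
;; (my-nconc a b) => (1 2 3 4 5), but now a IS (1 2 3 4 5): no new cells, old structure reused.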

show applications
  in ratom
  in assembler

may 15
other storage regimes
 knuth's kompendium


single linking
  applications
   insertion and deletion
   assemblers


double linking
  spine


look at traversal: in LISP we do it by recursion, but if you think about the
  run-time behavior of the machine code, it's done with a stack of pointers
   to save pointers (or returns) in the d.s.
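
a sketch (mine) of the same left-to-right visit of the atoms of an sexpr, once
by recursion and once with an explicit stack of pointers -- roughly what the
compiled recursive version is doing at run time:

(defun visit-rec (x fn)
  (cond ((atom x) (funcall fn x))
        (t (visit-rec (car x) fn)
           (visit-rec (cdr x) fn))))

(defun visit-stack (x fn)
  (let ((stack (list x)))
    (loop while stack
          do (let ((top (pop stack)))
               (cond ((atom top) (funcall fn top))
                     (t (push (cdr top) stack)   ; the saved pointer plays the role of the return
                        (push (car top) stack)))))))

;; both visit A B C NIL for (A (B . C)):
;; (visit-rec '(A (B . C)) #'print)   (visit-stack '(A (B . C)) #'print)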
threading
 substructure
 notice that for a specific order of visitation the dynamic state of the stack
   is always the same.
 link-bending g.c.


dynamic allocation
 symbol table blocks
  not really stack
   memory or disc management


sequential allocation
  different strokes

arrays and records

stacks, queues, and deques

go read knuth

semantics
 funargs and processes

review

questionnaires
cars
almost anyone can (and does!!) learn to drive at some minimal level
it's difficult to become a skilled driver
if you want to be taught how to fix cars you go to trade school
if you want to learn how to build/design them you go to college

programming
almost anyone can (and does) learn to program at some minimal level
it is difficult to become a good programmer
if you want to  become a hacker, go to condie college
if you want to learn how to build/design languages go to college
organized bull

comments on teaching short course
  high level vs. details --picked  h.l. because that's interesting
  lack of time for questions  -- very bad
    cf. (F A1 A2  ... AN)
          1) ignore
	  2) "not the way it's done anyway

  missed parts
   efficiency  applications a-i languages


what is wrong with lisp
 1) NOT syntax
 2) closures not done well
 3) d.s. weak
     a) for user
     b) for LISP i.e. implementation of THE interpreter not AN int

what is lisp-like lang
 1) interpreter based
 2) which works on NATURAL rep of prog.

clearly other languages can effect the map-- all are disgustingly the same.
but lisp did it intentionally


why it's good for you
 software problem: get programs which work. demonstrably correct
  problem is conceptual not technological
 l.s system
   do NOT want "smart" system. pain in ass
   do NOT want "mind-reader" system. pain in ass
     invaribly "stupid".
   structuring of structured programming occurs in construction
    cf. siftup
     what is structured programming. (sic)ing. 
      easier to say what it is NOT. not block structure, not APL
  
 system components language; command language to manipulate progs
   editor debugger, verifier, interpreter, compiler

 how to make lisp better
   user defined data structures
    what is d.s.? dunno. again what it is not c.f. actors and DCL
      more than stupid program...conceptually different
        c.f. everything is atom and driving to LA.
         anecdote on CDR and DBA
     like type...abstract syntax
        abstract control....cf. set var
         operations must map somehow to ppl.

system must be able to manipulate programs to tell if correct.
 semantics
  axiomatic
   axioms and rules of inference, reduction, simplification rules
    hoare
    mg for LISP
    λ-calc
     order of evaluation built into LISP but not into λ-calc
      normal forms in λ-cal  
       a) not everything has a nf., but if it does outside-in cbn gets it
       b) things with distinct nf are distinct ∃ args st. applying gets
          distinct objs
       c) not true for non nf. ∃ distinct exprs which under natural interpretation
           are same...how do you prove such?

  operational
    in terms of machine
     TM ...boo
     compiler...boo
     vdl...sigh
     lisp ....

  denotational
    language = syntax + pragmatics + semantics
      cf. λ-calc
      proc is just math function
      
 mg.
denotational model for lisp
reduction rules for lisp
  red(<e,a>) =A <=> e(a)=A       i.e. e is an expression. it is a function which when
applied to an arg (env) gives a value (denotation)

also eval(e*,a*) = e(a)

mtc notes

feature of scottery: read it; go away; come back;
"ah, ha !!"     it works.

so do it
********
three schools of semantics of pl.'s
operational
denotational
axiomatic

back to fact:
1. algorithm => need rule of computation at least 2 cbn, cbv
2. as an equation which may or may not have solns.
    (relate sol to the function computed by rule?)

as equation fact = T(fact)

solution is called fixed-point

T(U), T(T(U)), .....


**********
h.h.s about λ-calculus

syntax 

exp ::= λ[[id] exp] | exp[exp] | id

pragmatics
α- conversion
β-reduction
δ-rules

semantics
type-free theory of functions ( or is it)


normal forms: 
λ[[x] x[x]] [λ[[x] x[x]]] no nf

questions:
1. is it possible for two reduction sequences to terminate with different
results?  no: church rosser

2.note that some red-seq term and some don't. it is possible to
pick a "canonical" one such that if it terminates then this will do it.
yes: standardization thm. (left-most)

3. are distinct nf.'s "different"
   what's different  => extensionality
   what's distinct => α-con
yes

4. what about non-nf's?  we have terms which are extensionally indistinguishable but
not inter-reducible. what do you say? if you're scott and park
you say build a model and show that they are = as functions in the model.

note: Y1=Y2 true; Y1≡Y2 not provable in our formalism.

5. more  questions: note  above with  no nf.  it's a self-applicative
function  f[f]; what  does that  do for  you? it  says that  a model of
λ-calc must reflect such funny business. 

6. What does this shit have to do with pl's.  what is a p.l.?
how do you specify a p.l.? LISP's eval: what does it specify?--LISP?

what about cons[A,B,C]? what about  errors?

look at eval[ "f[a1, ..., an]"] = f[a1, .... an]

lhs is operational; rhs is denotational
how do you prove such? make a model of LISP.


7. why care?  want to be able to prove things about your programs? debugging
just isn't good enough.

8. what kinds of things do you want to prove? correctness, equivalence,
termination.

examples of proofs.

correctness of compilers:
compiler is mapping of source to target. indeed a homomorphism
look at regularity of sdt.

sl →→→→→→compile→→→→→→ tl
↓			↓
↓			↓
s. sem		    t. sem.
↓			↓
↓			↓
s mean ←←←←←decode←←← t. mean

corners are algebras, mappings are homomorphisms

london intuitive correctness; newey formal proof in lcf
morris algebraic results

equivalence append[reverse   
   by induction on the structure of the arg.
   termination on the structure of the computation; well-foundedness

correctness of a different sort;
hoare's rules
 p{Q}r

p∧s{B}p       p∧¬s⊃q
____________________
p{while s do B}q

ligler reconcile axiomatic and denotational

proposal
comments:
1. concentrate on what i do in class; use other stuff when bored, 
but DO IT!


sched: 
th: λ-cal syntactic
tu-th- models; applications; typed λ-cal and model theory
tu: construction of models
solvability
approx normal forms

projects SEC+D+M for λ-cal and LISP

formal systems
axioms
rules of inference
concept of deducibility and provability

def terms
there is NO good notation for λ-calc.

<term> ::= <var>| (<term><term>) | (λ<var>.<term>)
see CW.

calculus is attempting to be a formalism for a type-free theory of functions and
application. type-free means no restriction on application; (f f) is legal.

examples of substitution
a correct definition of substitution requires care (see HW-1), however

here are some examples which show the pitfalls.

(  (λx.(x (λy.y)))  (λz.(z a)) )
(  (λx.(x (λx.x)))  (λz.(z a)) )

(  ((λx.(λw.(w x))) y)  z )
(  ((λx.(λy.(y x))) y)  z )


(  (λz.(z a)) (λy.y) )
    ((λy.y) a)
        a

(  (λw.(w y))   z )
      (z y)


the examples also show how foul the notation is; thank god there's no market
for λ-cal programmers!!!!!
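
a sketch of the substitution M[x := N] in a modern Lisp (mine; hypothetical
representation: a symbol for a variable, (LAMBDA v body) for an abstraction,
(rator rand) for an application), renaming bound variables with gensym to
dodge capture:

(defun free-in (v m)
  (cond ((symbolp m) (eq v m))
        ((eq (car m) 'LAMBDA)
         (and (not (eq v (cadr m))) (free-in v (caddr m))))
        (t (or (free-in v (car m)) (free-in v (cadr m))))))

(defun subst-term (n x m)                        ; M[x := N]
  (cond ((symbolp m) (if (eq m x) n m))
        ((eq (car m) 'LAMBDA)
         (let ((v (cadr m)) (body (caddr m)))
           (cond ((eq v x) m)                    ; x is rebound: leave the body alone
                 ((and (free-in v n) (free-in x body))
                  ;; v would capture a free v of N: rename the bound variable first
                  (let ((v1 (gensym)))
                    (subst-term n x (list 'LAMBDA v1 (subst-term v1 v body)))))
                 (t (list 'LAMBDA v (subst-term n x body))))))
        (t (list (subst-term n x (car m)) (subst-term n x (cadr m))))))

;; (subst-term 'Y 'X '(LAMBDA W (W X)))  =>  (LAMBDA W (W Y))
;; (subst-term 'Y 'X '(LAMBDA Y (Y X)))  =>  (LAMBDA #:G123 (#:G123 Y)) -- the free Y is not captured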


contexts:

examples:

if     C[] ≡ (λx.(x (y [])))  then
       C[(z y)] ≡ (λx.(x (y (z y)))).

contexts make it easier to state results about all terms:

α-conversion C[λx.M] may be converted to C[λy.M'] where M' is the subst of y for x in M
with  of course proper def of substitution.


β-conversion
terminology: β-redex  is a ((λx.M)N)  
we can do β-expansions as well as reductions. usually interested in reductions.

eta-conversion  (λx.(Mx)) is an eta-redex and if x is not free in M we can
contract to M.
(see homework)

normal forms:  finally, a term is in normal form if it contains no redexes.



church-rosser: as i intimated last time there may be more than one red-seq
for a specific λ-expression; we would hope that if the sequences terminate,
they would terminate with same NF. (where termination means no redexes)
(and same means up to α-cnv).
the answer is yes and is a result of C-R: 

C-R: for ANY (not just terminating) expression U, if UredX and UredY
then there is a Z such that XredZ and YredZ.
The NF result follows immediately: UtermX and UtermY then X and Y have no
redexes; now look at Z of C-R. what can we do to get from X,Y to Z?
nothing but α-cnv. thus Xα-cnvY.



delta: delta-rules are an attempt to turn λ-cal into an honest language.
that is we attempt to capture the intended interpretation of
constants by adding axioms (usually to be used as reduction rules)

we can (sigh) show how to construct λ-cal expressions which act like
the integers, conditional  expressions and all the other crap for
elementary number theory. (almost!!! we don't get induction this way
and therefore cannot prove nice general relations between expressions;
we'll fix that soon enough)


Y: is one of a sequence of fix-point operators. (fixpoint means solution
to equation so fix-point operator is equation-solver)
it acts like LISP's label; doesn't have normal form and is an anti-social
son of a bitch. 




standardization: it is easy to show that there are different reduction strategies;
some terminate, some don't. However it can be shown that the scheme of
always taking the left-most redex will terminate if any will, and with
C-R we are in reasonable shape.



head normal form

solvability



homework


1. write out a version of substitution

2. do dit p6. c. wadsworth

3. start expansion of (JA). (ho, ho, ho)

4.here's def of extensionality:

        (Mx) =(Nx); x not in FV(M) or FV(N)   => M=N

show extensionality => eta

5. show  eta without "x not in FV(M)" implies that for
ANY terms M and N we can  show M=N. (This would imply inconsistency of λ-cal.)

6.  how would you prove "delta-delta" of wadsworth(p.6) has no NF?

7. give an example of a λ-expression which has at least two red. sequences,
one terminates, one does not.

8.show λx.(fx) and f are extensionally equivalent.

9.using Y on page 6 of CW,  show that (Yf) and (f(Y f)) can be converted to
same expression.

10 (sloppy notation; fix it up).
Using   fact≡Y(λfλx[Zx →1; *[x f(pred x)]])
try to make sense of (fact 2). you will have to gen-up some delta rules
like:
    Z0 => T
    [T→M,N] => M
    [F→M,N] => N

etc.
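
one way to make sense of it in a Lisp (my sketch, not part of the problem):
since a call-by-value Lisp loops on Y itself, use the applicative-order
(eta-expanded) fixed-point combinator instead:

(defun fix (f)
  ((lambda (x) (funcall f (lambda (n) (funcall (funcall x x) n))))
   (lambda (x) (funcall f (lambda (n) (funcall (funcall x x) n))))))

(defvar fact
  (fix (lambda (self)
         (lambda (n) (if (zerop n) 1 (* n (funcall self (- n 1))))))))

;; (funcall fact 2) => 2, playing the role of the reduction of (fact 2) above.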
Well this  seminar  was to  introduce you  to  the last  10 years  of
computer science. Instead,  I will first have to pull you kicking and
screaming into twentieth century mathematics. Previously I  have said
that what we  really wanted to look at was  the relationships between
truth, deduction,  and computation. We should really step back further
and start with the human  mind-- you know, the thing that  keeps your
ears from falling in?--. 

There's a not too flattering  parallel between the  attitudes of many
practicing computer scientists and  their mathematical brethren.  Many
people use  programming languages --computation--  as a  tool without
further analysis.  They've got a problem, stuff it in the machine, it
worked, go away.  They do not wish to understand why  it worked; what
the limitations of  computation are, ... They do not  want to analyze
their  tools!  I find  that attitude  irritating.   It  is similar  to
people who drive their cars until they fall apart  and then bitch like
hell cause  "it don't woik no  more." This lack of  analysis of tools
carries over to much of mathematics. What are the tools here? Not  to
keep you in suspense,-- they're "proofs". How do you know when you've
proved something. What  are convincing proofs. What are valid proofs?
Assuming  you   can   give   a   reasonable   (what's   reasonable??)
characterization  of "proof"  can proof  checking be  mechanized; can
proof  discovery  be  mechanized?  Can YOU  be  replaced  by  a giant
ee-electronic brain?  It's job  security, baby, so  better know  your
enemy!! 

So we want to begin an analysis of human reasoning; in particular an
analysis of deduction. For example if I know: 

1) if you're bored with this seminar you go to San Diego
2) you're (surely) bored
3) We  can therefore safely  assume to see  you hauling  south almost
instantly. 

This  seems  logical  -  the  reasoning  anyway.  That is,  under  the
assumption that 1) and 2) are  true, we can expect, or deduce 3).  If
we look at  the structure of the argument it  appears that regardless
of the components, the form of the argument is "convincing". That is:

1) "If A then B"  is true
2) "A" is true
3) therefore "B" is true.

This is  the beginnings of  an analysis  of informal reasoning.  More
sophisticated reasoning will follow.  Let's look more closely at what
we've got now. 

How much of this  can we formalize? Since  the style of the  argument
seems to  maintain validity (a term  we will make more  precise in a
while)  and since the  reasoning seems mechanical, can  we describe a
mechanical process to carry out these "1-2-3" arguments?   Let's try:
first we  need a collection of  immutable (love that  word) truths to
use as  cannon fodder  for  the "1-2-3"  rule; call  these  statements
"axioms"; then we  can rub these  axioms together using our  rule and
generate new  truths.  Now "truth" is a loaded  word, so let's try to
be a little more  precise and less truthful. Since  we are interested
in the power of  the arguments we're making and not in the form of the
components, let's make formalize the  1-2-3 rule, writing "if A  then
B" as "A⊃B"; and let's rewrite 1-2-3 as a kind of simplification rule
or rule of inference.

          A⊃B      A
          ___________
               B

This rule is to be used as follows: if  we have creatures of the form
A⊃B and of the form A in our collection, then we may add the creature
B.    Notice  the   style  of  manipulation  is  becoming   much  more
mechanical; we are only looking at  the structure and not the content
of the  argument.  But do not forget the origin of this mechanism. It
was distilled out of human reasoning. (Logic is not a game! Neither is
mathematics.  If you don't believe that read E.T. Bell's book.)

Formally we could equally well use the rule:

          A⊃B      B
          ___________
               A

But somehow when we interpret the meaning of the symbolism this second
rule would offend our sensibilities. Problem: find such an example.

Now to  put more meat  on this discussion.  What we are  beginning to
describe is called  a formal system. We have axioms: statements taken
as given;  we have  rules  of inference:  schemas describing  how  to
combine statements to get new statements.  The process of combination
is  called  derivation and  the output,  if you  wish,  of a  rule of
inference is called a deduction.  Statements  which are deducible are
called theorems; and a proof (of a theorem) is a derivation tree such
that the tip nodes of the tree are axioms, the intermediate nodes are
the results of  the application of a  rule of inference and  the root
node is the theorem. (shit! turn the tree upside down.)

There should  be  all kinds  of questions  popping up  in your  little
minds: how do  you know that the rule of inference characterizes that
piece of human reasoning which it is supposed to? given  a collection
of axioms and rules of inference, what can be said about the class of
things  which can be proved:  can I prove  everything (too much)- the
system is  inconsistent.   are  there things  which  I would  like to
prove, but can't (too little). 

Notice that  the process  of generating proofs  is easy: build  a GSS
(Giant Shit  Spreader) which  takes the  rules of  inference and  the
initial  set  of   axioms  and  generates  deductions,   feeding  the
deductions back into the initial set. Notice too that the problem "is
this  statement a  theorem"  is  much  more  difficult.  Formally  it
requires  the   explicit  demonstration  of  a   proof--  a  specific
derivation  tree--.   Perhaps  there's an  alternative;  if we  had a
mechanical device which would gobble up the problem and produce a yes
or no  answer, but perhaps without giving  an explicit proof: perhaps
that would satisfy us.  Such a device is called a decision procedure.
(notice I'm  trying  to ask  a lot  of questions  without giving  any
answers yet --that's intentional--you are supposed to think!)
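
a toy version of that GSS in Lisp (my sketch), for the single rule above, with
"if A then B" represented as (IMPLIES A B): apply the rule to everything
known, feed the new theorems back in, and stop when nothing new appears.

(defun one-step (theorems)
  ;; all B such that (IMPLIES A B) and A are both already theorems, B not yet
  (let ((new '()))
    (dolist (th theorems)
      (when (and (consp th) (eq (car th) 'IMPLIES)
                 (member (cadr th) theorems :test #'equal)
                 (not (member (caddr th) theorems :test #'equal)))
        (pushnew (caddr th) new :test #'equal)))
    new))

(defun close-under-mp (axioms)
  (let ((thms axioms) (new t))
    (loop while new
          do (setq new (one-step thms))
             (setq thms (append thms new)))
    thms))

;; (close-under-mp '((IMPLIES P Q) (IMPLIES Q R) P)) ends up containing P, Q and R.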

We have  the beginnings,  but surely  mathematical reasoning  entails
much  more sophisticated  logic than  that  which we  have presented.
Before we go on to some rather interesting areas, we can  extract one
more interesting facet. (tap one  more faucet??, ugh!) Perhaps we can
go  after the question  of " is XXX  a theorem" by  going back to the
interpretation of the content of  XXX. That is, if the axioms  of our
formalism  are supposed  to  capture our  intuitions  about primitive
truths  and  our  rules  of   inference  are  supposed  to   preserve
(intuitive) truth,  then perhaps  we  can "map  back"  our question  of
theorem-hood to a  question of truth and say "well since it's true in
my interpretation then it  must be provable,  even though I can't  be
bothered to  find an  explicit proof."  That's a  damn big  step. and
indeed many of the very interesting results in mathematical logic deal
with exactly what's provable in formal systems and what's true. 

outline
  strict
  monotone
  continuity

typed λ-cal and models
 why
  1.much simpler intro to modeling of lang
  2. will use it to build lcf
  3. will use it to build models of type-free λ-cal

intuitive description of strictness, monotonicity and continuity

consider a typical recursive definition  of a function, f.  deep in  its black heart
there must  be a function which can determine  its value without complete information
concerning the values  of all  of its arguments  --a "don't care  condition"--.   For
otherwise the little bugger won't terminate;  it will continue to attempt to evaluate
all arguments, ad nauseam. 

Functions which can determine their value w.o. complete information on all their arguments
are called  NON-STRICT.  The conditional function is such  a function.  Again, think about
the algorithm which is  computing the function.  Thus  cond(TRUE,q,r) has value q  without
knowing  anything  about  what  happens  to  r.     Or  writing  "d.c."  for  don't  care:
cond(TRUE,q,"d.c.") = q.  Likewise cond(FALSE,"d.c.",r) = r.  We must also notice that the
value of cond(TRUE,q,x) must  therefore in no way depend  on what value is assigned  to x.
I.e. cond is a function of the form:

f(a,"d.c.") = b => ∀dεD.f(a,d)=b. 

an aside: the class  of functions in which we  are interested are typically  partial.
(we're partial to partial).For  example the value of a function  may not be specified
for specific  arguments of the domain.  (car[A] is unspecified or <A, ... > is not in
the graph of "car".)

We will find it convenient to  extend the normal domain of individuals to  include an
element denoting "unspecified" to be used like "don't care".  We name that individual
∞ (since back-biting  keyboard doesn't  have "eet") Thus  our functions  will now  be
total over the  extended domain D∪{∞}.  It  is also convenient (and natural,  but not
too meaningful  yet) to impose a structure  on our domain. I.e., think  of it as more
that a set of elements. If for no other reason, ∞ is not used quite like the elements
of D: we can't say  things like " if f(a)=∞ then 1 else 2".   The structure we impose
is a partial order (reflexive, transitive, and anti-symmetric...patience, please; more
structure later) The partial order on  this domain is the trivial one: ∞ ≤  d for any
dεD  and elements  of D  are all  incomparable.   Such a  domain is  called FLAT.   A
suggestive reading of "x≤y" is "y is an extension of x" or "y extends x". 

Then over our extended domain our statement above becomes:

f(a, ∞) = b => ∀dεD.f(a,d)=b. 

This implication is a particular instance of the relation:

∀x,y(x≤y => g(x)≤g(y)) the function g is then called MONOTONIC. 

Now let's  look  at functional  operators.   Let  f  be monotonic  and  consider  the
computation: π(f)(d).    It may  or may  not terminate;  assume  that it  does.   Its
termination  cannot depend on computations  involving whether or not  values of f are
unspecified.  Thus if g is any function which extends f (g(d)=f(d) for dεD) then
the computation of  π(g)(d) must equal π(f)(d),  and since g extends  f, π(g) extends
π(f). I.e. π(g)(d) might be specified where π(f)(d) isn't. 

f≤g => π(f)≤π(g). 
(notice that f≤g determines a point-wise partial order)
Finally consider a sequence of functions {fi} such that

f1≤f2≤ ... ≤fi≤... ( not necessarily infinite, but may be)

Such a  sequence is called a chain. Let f be  the least extension of all of the fi's.
I.e. fi≤f for all i; and if fi≤g for all i, we have f≤g. 

Consider the computation of π(f)(d); if it terminates then at  most a finite number of the
fi's have been used in the computation.  And in fact for some i π(f)(d) =π(fi)(d).  Indeed
this can be  done for any  d s.t. π(f)(d) terminates.   That says  that π(f) is  the least
extension of the set {π(fi)}. 

Writing "∪" as "least extension of" we have:

π(∪{fi}) = ∪{π(fi)}. This is the statement that π is CONTINUOUS. 
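
a sketch (mine) of such a chain for the factorial functional: BOTTOM plays ⊥,
TAU is the functional, and (approx k) is TAU applied k times to BOTTOM --
defined on exactly 0..k-1.

(defun bottom (n)                     ; the everywhere-undefined function
  (declare (ignore n))
  (error "undefined"))

(defun tau (f)                        ; the functional whose fixed point is factorial
  (lambda (n) (if (= n 0) 1 (* n (funcall f (- n 1))))))

(defun approx (k)                     ; tau applied k times to bottom
  (if (= k 0) #'bottom (tau (approx (- k 1)))))

;; (funcall (approx 4) 3) => 6, while (funcall (approx 3) 5) hits "undefined":
;; each approx extends the one before it, and factorial is their least extension.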


PROBLEMS

1.prove continuity=>monotonicity

In the next two problems  let D[0] be flat(int). I.e.   integers plus ∞ with  partial
order described above. 

2.prove that in D[1] (=D[0→0]) monotonicity => continuity

3. prove in D[1] that every chain has a lub. 

4. Let  D be: ∞→d1→d2→d3→ ...  →dn →...d∞ and  U be: ∞→T.  Define a monotone  but not
continuous function f:D→U. 
basic structure of LCF
 structure of language
     typed λ-cal with constants for conditionals, fix-points and induction
       note that we could simulate in type free, but cf. 0 as λ-exp
        would allow 0(x) as an expression even though under the interpretation, it's
        a loss.

  axioms and rules of inference
    do 'em.

  applications to programming languages
  example of D(id) from D(ind) and typing
  example of formulation of problem.

gordon
mike's crap

motivation:
 compare  s.a.e. 
we are able to comprehend  values of sae. without an evaluation mechanism;
indeed we seldom even think about it. we assume a church-rosser theorem:
that order of evaluation or reduction makes no difference.

but there is a simple computational machine for these expressions.


now what about lisp?
It is defined in terms of an evaluator. Mg's task was to supply a 
compatible denotational semantics. What's that mean? the denotation of a LISP
function is supposedly a mathematical function or mapping... but what
kind?
why bother? language specification; provable properties of programs written in the
language. Mike is going further: instead of proving correctness
of each language, he's attempting to isolate various assumptions
used in these kinds of proofs and then prove general theorems from
these assumptions. to apply the results one need only show that the
language in question satisfies the assumptions.
examples 	1. when is label and Y equivalent
		2. when will an expression evaluate to same val in two envrs.


now LISP has some troubles
krocky in spots, but hindsight is better than ...

call-by-value λ[[x]NIL][e] is NOT NIL since computation of e may f.u.
c.f. λ-calculus. compare strict functions.

a trivial problem cons[A;B;C] is (A . B). tsk, tsk, LISP. but this
shows problems of defining a language. How do you ever know what you
have specified unless you can prove properties!?!

More trouble is dynamic binding scheme. The "natural interpretation"
of label[f;fn] as fix-point is loss.


What is LISP talking about?

look at λ[[x]car[cdr[x]]]
1. denotational: its function composition over sexprs
2. operational: it's a rule of computation; bind argument to x; evaluate cdr;
then evaluate car of result.

or apply[fn*;(a1*, ..., an*)] = fn[a1, ... an]

lhs is operational ::: rhs is denotational


what should we expect  from the  mathematical semantics

I. partial functions and forms: LISP functions denote partial mappings
LISP forms denote sexprs or are undefined.

II. value of f[e1, ..., en]: apply the denotation of f to the values of the ei,

III. the denotation of label[f;λ[[x1 ... xn] e]] is a mapping
F s.t. F(x1, ..., xn) = e.

given these expectations, try to formulate a semantics
similar to what we did for λ-cal, and LCF.
outline for mg no. 2

domains:
  why env = id→ d is loss
  why env = id→ [env ← d] wins

how to obtain solutions
  1. via scott env[i]
  2. retract of universal space

calculus
   show rules
   describe main theorem
   do proof that lisp funs are strict

talk about proof on main theorem
    wadsworth's technique...see later in λ-cal

extensions to quote and function

extensions to prog
  lisp golist
  continuations:  sequencing, error exits